In order to measure the similarity of multi-relational nodes and mine the community structure formed by such nodes, a community mining algorithm based on the multiple relationships of nodes, called LSL-GN, was proposed. Firstly, based on node similarity and node reachability, LHN-ISL, a similarity measurement index for multi-relational nodes, was defined to reconstruct a low-density model of the target network, and the community division was completed by combining this model with the GN (Girvan-Newman) algorithm. The LSL-GN algorithm was compared with several classical community mining algorithms on Modularity (Q value), Normalized Mutual Information (NMI) and Adjusted Rand Index (ARI). The results show that the LSL-GN algorithm achieves the best results on all three indexes, indicating that its community division quality is better. The “User-Application” mobile roaming network model was divided by the LSL-GN algorithm into community structures centered on basic applications such as Ctrip, Amap and Didi Travel. These community division results can provide strategic reference information for designing personalized package services.
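As a minimal illustration of the reconstruction step, the sketch below computes an LHN-style similarity (shared neighbours normalised by the degree product) and keeps only edges whose endpoints are similar enough, yielding the sparser, low-density network on which a GN-style algorithm could then run. The exact LHN-ISL formula and the threshold value are assumptions, not the paper's definitions.

```python
def lhn_similarity(adj, u, v):
    """LHN-style index: shared neighbours normalised by the degree
    product. A stand-in for the paper's LHN-ISL definition."""
    shared = len(adj[u] & adj[v])
    return shared / (len(adj[u]) * len(adj[v]))

def sparsify(adj, threshold):
    """Reconstruct a low-density model: keep only edges whose endpoint
    similarity reaches the threshold (threshold value is illustrative)."""
    kept = set()
    for u in adj:
        for v in adj[u]:
            if u < v and lhn_similarity(adj, u, v) >= threshold:
                kept.add((u, v))
    return kept

# Toy network flattened to an undirected adjacency map.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1}, 3: {0, 1, 4}, 4: {3}}
edges = sparsify(adj, 0.15)
```

The surviving edge set is then handed to the GN procedure, which repeatedly removes the edge with the highest betweenness until communities separate.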
The identification of key nodes in complex networks plays an important role in optimizing network structure and propagating information effectively. Local structural Entropy (LE) identifies key nodes by using the influence of the local network on the whole network in place of the influence of individual nodes on the whole network. However, LE does not consider highly aggregative networks or nodes that form loops with their neighbors, which leads to some limitations. To address these limitations, an improved LE based node importance evaluation method, namely PLE (Penalized Local structural Entropy), was first proposed, in which, on the basis of LE, the Clustering Coefficient (CC) was introduced as a penalty term to appropriately penalize the highly aggregative nodes in the network. Secondly, since PLE penalizes nodes in triadic closure structures too heavily, an improved version of PLE, namely PLEA (Penalized Local structural Entropy Advancement), was proposed, in which a control coefficient was introduced in front of the penalty term to control the penalty strength. Selective attack experiments on five real networks of different sizes were conducted. Experimental results show that on the Western US power grid and the US Airlines network, PLEA improves identification accuracy by 26.3% and 3.2% respectively compared with LE, by 380% and 5.43% respectively compared with the K-Shell (KS) method, and by 14.4% and 24% respectively compared with the DCL (Degree and Clustering coefficient and Location) method. The key nodes identified by PLEA cause more damage to the network, verifying the rationality of introducing CC as a penalty term as well as the effectiveness and superiority of PLEA. By integrating the number of neighbors and the local network structure of nodes with computational simplicity, PLEA is more effective for describing the reliability and invulnerability of large-scale networks.
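A rough sketch of the PLE/PLEA idea follows: compute an entropy over the degrees in a node's local network, then damp it by the node's clustering coefficient through a control coefficient `lam`. The multiplicative penalty form and the value of `lam` are assumptions; the paper's exact formulas are not reproduced here.

```python
import math

def local_entropy(adj, i):
    """Entropy over the degrees of node i's local network (i plus its
    neighbours) -- a rough stand-in for LE."""
    nodes = {i} | adj[i]
    degs = [len(adj[n]) for n in nodes]
    total = sum(degs)
    return -sum(d / total * math.log(d / total) for d in degs)

def clustering_coeff(adj, i):
    """Fraction of neighbour pairs of i that are themselves linked."""
    nbrs = list(adj[i])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in range(k) for b in range(a + 1, k)
                if nbrs[b] in adj[nbrs[a]])
    return 2 * links / (k * (k - 1))

def plea(adj, i, lam=0.5):
    """PLEA-style score: entropy penalised by the clustering coefficient,
    scaled by a control coefficient lam (assumed form and value)."""
    return local_entropy(adj, i) * (1 - lam * clustering_coeff(adj, i))

tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}   # a triangle: maximal clustering
score = plea(tri, 0)
```

On the triangle, the full clustering coefficient halves the entropy at `lam = 0.5`, showing how triadic closures are penalised but, thanks to `lam`, not erased.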
When measuring the importance of attributes in the Tensor-based Multiple Clustering algorithm (TMC), the relevance of attribute combinations within object tensors is ignored, and the selected and unselected feature spaces are not completely separated because a fixed weight strategy is used under different feature space selections. To address these problems, a Multiple Clustering algorithm based on Dynamic Weighted Tensor Distance (DWTD-MC) was proposed. Firstly, a self-association tensor model was constructed to improve the accuracy of attribute importance measurement in each feature space. Then, a multi-view weight tensor model was built, and a dynamic weighting strategy under different feature space selections was applied to meet the task requirements of multiple clustering analysis. Finally, the dynamic weighted tensor distance was used to measure the similarity of data points and generate multiple clustering results. Simulation results on real datasets show that DWTD-MC outperforms comparison algorithms such as TMC in terms of Jaccard Index (JI), Dunn Index (DI), Davies-Bouldin index (DB) and Silhouette Coefficient (SC). It obtains high-quality clustering results while maintaining low redundancy among them, meeting the task requirements of multiple clustering analysis.
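The contrast between fixed and dynamic weighting can be sketched as follows: instead of applying one fixed weight vector regardless of which features are selected, the weights are re-normalised over the currently selected subspace before the distance is computed. This is an assumed simplification of the paper's dynamic weighted tensor distance; the feature names and values are illustrative.

```python
import math

def dynamic_weights(base, selected):
    """Re-normalise base weights over the currently selected feature
    subspace instead of keeping them fixed across selections."""
    total = sum(base[k] for k in selected)
    return {k: base[k] / total for k in selected}

def weighted_distance(x, y, weights):
    """Weighted Euclidean distance over the selected features only."""
    return math.sqrt(sum(w * (x[k] - y[k]) ** 2 for k, w in weights.items()))

base = {"colour": 2.0, "shape": 1.0, "texture": 1.0}
w = dynamic_weights(base, ["colour", "shape"])
d = weighted_distance({"colour": 1.0, "shape": 0.0},
                      {"colour": 0.0, "shape": 0.0}, w)
```

Each feature space selection thus yields its own distance, so distinct clusterings can be produced from the same data.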
A Hashgraph-based data management method for the building Internet of Things (IoT) was proposed to address the severe lack of throughput and the high response delay encountered when applying blockchain to building IoT scenarios. In this method, a Directed Acyclic Graph (DAG) was used for data storage, whose highly concurrent structure increases the throughput of the blockchain; the Hashgraph algorithm was applied to reach consensus on the data stored in the DAG, reducing the time consumed by consensus; and smart contracts were designed to realize access control and prevent unauthorized users from operating on the data. Caliper, a blockchain performance testing tool, was adopted for the performance tests. The results show that in a medium-scale simulation environment with 32 nodes, the throughput of the proposed method is 1063.1 transactions per second, which is 6 times and 3 times that of the edge computing and cross-chain methods respectively; the data storage delay and control delay of the proposed method are 4.57 seconds and 4.92 seconds respectively, indicating that it responds faster than the comparison methods; and the transaction success rate of the method reaches 87.4% in spike testing. At the same time, a prototype system based on the method ran stably for 120 hours in stability testing. These results illustrate that the proposed method can effectively improve the throughput and response speed of blockchain and meets the actual needs of building IoT scenarios.
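The DAG storage layer can be sketched as a content-addressed event store in which each event references a self-parent and an other-parent, as in Hashgraph. Consensus, gossip and the smart-contract layer are omitted; all names and fields are illustrative, not the paper's implementation.

```python
import hashlib
import json

class DAGStore:
    """Sketch of DAG-style event storage: each event is content-addressed
    and references a self-parent and an other-parent, as in Hashgraph."""

    def __init__(self):
        self.events = {}

    def append(self, payload, self_parent=None, other_parent=None):
        # Hash over payload plus parent references gives a tamper-evident id.
        blob = json.dumps([payload, self_parent, other_parent], sort_keys=True)
        h = hashlib.sha256(blob.encode()).hexdigest()
        self.events[h] = {"payload": payload,
                          "parents": (self_parent, other_parent)}
        return h

store = DAGStore()
a = store.append({"sensor": "t1", "value": 21})
b = store.append({"sensor": "t1", "value": 22}, self_parent=a)
```

Because appends do not contend for a single chain tip, many nodes can extend the graph concurrently, which is the structural source of the throughput gain the abstract describes.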
Concerning the problems that traditional access control methods suffer from single points of failure and fail to provide trusted, secure and dynamic access management, a new access control model based on blockchain and smart contracts for Wireless Sensor Networks (WSNs) was proposed to address the poor access dynamics and low level of intelligence of existing blockchain-based access control methods. Firstly, a new blockchain-based access control architecture was proposed to reduce network computing overhead. Secondly, a multi-level smart contract system including an Agent Contract (AC), an Authority Management Contract (AMC) and an Access Control Contract (ACC) was built, thereby realizing trusted and dynamic access management for WSN. Finally, a dynamic access generation algorithm based on a Radial Basis Function (RBF) neural network was adopted and combined with the access policy to generate the credit score threshold of access nodes, realizing intelligent and dynamic access control management for the large number of sensors in a WSN. Experimental results verify the availability, security and effectiveness of the proposed model in WSN secure access control applications.
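The ACC layer's gating logic can be illustrated by a minimal sketch: access is granted only when the requester's credit score reaches a threshold, and misbehaviour lowers the score. The threshold is fixed here, whereas the paper derives it dynamically with an RBF neural network; all names and values are assumptions.

```python
class AccessControlContract:
    """Sketch of the ACC layer: grant access only if the requesting
    node's credit score reaches a threshold (fixed here; dynamically
    generated by an RBF network in the paper)."""

    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.scores = {}

    def register(self, node, score=1.0):
        self.scores[node] = score

    def penalize(self, node, amount=0.3):
        # misbehaviour lowers the credit score
        self.scores[node] = max(0.0, self.scores[node] - amount)

    def check_access(self, node):
        return self.scores.get(node, 0.0) >= self.threshold

acc = AccessControlContract()
acc.register("sensor-1")
granted = acc.check_access("sensor-1")     # fresh node: allowed
acc.penalize("sensor-1")
acc.penalize("sensor-1")
denied = not acc.check_access("sensor-1")  # score now below threshold
```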
Concerning the problems of artifacts and loss of image details in images analytically reconstructed by time-domain filters, a new time-frequency domain Computed Tomography (CT) reconstruction algorithm based on Convolutional Neural Network (CNN) was proposed. Firstly, a filter network based on a convolutional neural network was constructed in the frequency domain to filter the projection data. Secondly, the back-projection operator was used to perform domain conversion on the frequency-domain filtered result to obtain a reconstructed image, and a network was constructed in the image domain to process the image from the back-projection layer. Finally, a multi-scale structural similarity loss function was introduced on the basis of the minimum mean square error loss function to form a composite loss function, which reduced the blurring effect of the neural network on the resulting image and preserved the details of the reconstructed image. The image domain network and the projection domain filter network worked together to produce the final reconstruction. The effectiveness of the proposed algorithm was verified on a clinical dataset. Compared with the Filtered Back Projection (FBP) algorithm, the Total Variation (TV) algorithm and the image domain Residual Encoder-Decoder CNN (RED-CNN) algorithm, when the number of projections is 180 or 90, the proposed algorithm achieves the reconstructed images with the highest Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), and the lowest Normalized Mean Square Error (NMSE). When the number of projections is 360, the proposed algorithm is second only to the TV algorithm. The experimental results show that the proposed algorithm can improve the quality of reconstructed CT images, and is feasible and effective.
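The composite loss can be sketched as mean squared error plus a structural term. For brevity this sketch uses a single-window, global SSIM over flat pixel lists, whereas the paper's loss uses a multi-scale SSIM; the weighting `beta` and the stability constants are assumptions.

```python
def _mean(v):
    return sum(v) / len(v)

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM over flat pixel lists (the paper's loss uses a
    multi-scale variant, collapsed here to one scale)."""
    mx, my = _mean(x), _mean(y)
    vx = _mean([(a - mx) ** 2 for a in x])
    vy = _mean([(b - my) ** 2 for b in y])
    cov = _mean([(a - mx) * (b - my) for a, b in zip(x, y)])
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def composite_loss(pred, target, beta=0.1):
    """MSE plus a structural penalty; beta balances the two terms."""
    mse = _mean([(p - t) ** 2 for p, t in zip(pred, target)])
    return mse + beta * (1.0 - ssim_global(pred, target))

same = composite_loss([0.1, 0.5, 0.9, 0.3], [0.1, 0.5, 0.9, 0.3])
diff = composite_loss([0.0] * 4, [1.0] * 4)
```

The structural term keeps the loss sensitive to contrast and correlation, not just pixel-wise error, which is what counters the blurring tendency of a pure-MSE objective.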
Microservice invocation link data is an important type of data generated in the daily operation of a microservice application system; it records, in the form of a link, the series of service invocations corresponding to a user request in the microservice application. Because of the distributed nature of the system, microservice invocation link data are generated at different microservice deployment nodes, and the current methods for collecting these distributed data are full collection and sampling collection. Full collection may bring large data transmission and storage costs, while sampling collection may miss critical invocation data. Therefore, an event-driven, pipeline-sampling based dynamic collection method for microservice invocation link data was proposed, and a microservice invocation link system supporting dynamic collection of invocation link data was designed and implemented based on the open-source software Zipkin. Firstly, pipeline sampling was performed on the link data of different nodes that met the predefined event features; that is, the link data of all nodes for the same request were collected by the data collection server only when a node generated data matching the defined events. Meanwhile, to address the inconsistent data generation rates of different nodes, multi-threaded streaming data processing based on time windows and data synchronization technology were used to realize data collection and transmission across nodes. Finally, considering that the link data of each node arrive at the server in different orders, the synchronization and summarization of the full link data were realized through a timing alignment method.
Experimental results on a public microservice invocation link dataset show that, compared with the full collection and sampling collection methods, the proposed method collects link data containing specific events such as anomalies and slow responses more accurately and efficiently.
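The event-driven collection and timing alignment described above can be sketched as follows: the full cross-node link data for a request are kept only when some node produced a span matching the event features, and the kept spans are then ordered by timestamp. The field names and the 500 ms latency budget are illustrative, not taken from the Zipkin-based system.

```python
def matches_event(span, slow_ms=500):
    """Predefined event features: an error flag or an over-budget
    latency (field names and threshold are illustrative)."""
    return bool(span.get("error")) or span["duration_ms"] > slow_ms

def collect_trace(spans_by_node, slow_ms=500):
    """Keep the full cross-node link data only when some node produced a
    matching span, then time-align spans by timestamp (a stand-in for
    the paper's timing alignment method)."""
    all_spans = [s for spans in spans_by_node.values() for s in spans]
    if not any(matches_event(s, slow_ms) for s in all_spans):
        return None                     # dropped: no event of interest
    return sorted(all_spans, key=lambda s: s["ts"])

trace = {
    "node-a": [{"ts": 2, "duration_ms": 40, "error": False}],
    "node-b": [{"ts": 1, "duration_ms": 900, "error": False}],  # slow span
}
boring = {"node-a": [{"ts": 1, "duration_ms": 10, "error": False}]}
collected = collect_trace(trace)
```

Traces without anomalies or slow responses are dropped entirely, which is how the method avoids full collection's transfer and storage costs without losing the critical links.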
Ensemble resampling technology can, to some extent, solve the problem of imbalanced samples in financial early warning research, and different ensemble models suit different ensemble resampling technologies. It was found in this study that Up-Down ensemble sampling and Tomek-Smote ensemble sampling are respectively suitable for the Bagging-Vote ensemble model and the Stacking fusion model. On this basis, a Stacking-Bagging-Vote (SBV) multi-source information fusion model was built. Firstly, the Bagging-Vote model based on Up-Down ensemble sampling and the Stacking model based on Tomek-Smote sampling were fused. Then, stock trading data were added and processed by Kalman filtering, so that interactive fusion optimization at both the data level and the model level was realized, and the SBV multi-source information fusion model was finally obtained. This fusion model not only greatly improves prediction performance by taking prediction accuracy and prediction precision into account simultaneously, but also allows different stakeholders to meet their actual needs by adjusting the model parameters to select the corresponding SBV multi-source information fusion model for financial early warning.
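The Kalman filtering step applied to the stock trading data can be illustrated with a one-dimensional filter under a random-walk state model; the noise settings `q` and `r` are illustrative, not the paper's tuned values.

```python
def kalman_1d(series, q=1e-3, r=0.5):
    """One-dimensional Kalman smoothing of a price series under a
    random-walk state model (q: process noise, r: measurement noise)."""
    x, p = series[0], 1.0
    out = []
    for z in series:
        p = p + q                 # predict: variance grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with measurement z
        p = (1 - k) * p
        out.append(x)
    return out

smoothed = kalman_1d([1.0, 2.0, 3.0, 2.5])
```

The smoothed series damps measurement noise while following the underlying trend, giving the downstream ensemble cleaner inputs.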
In cluster-based routing algorithms for Wireless Sensor Networks (WSNs), the "energy hole" phenomenon results from imbalanced energy consumption among sensors. To address this problem, a hybrid multi-hop routing algorithm for effective energy-hole avoidance was put forward on the basis of research on flat and hierarchical routing protocols. Firstly, the concept of a hotspot area was introduced to divide the monitoring area; in the clustering stage, the amount of data outside the hotspot area was reduced by an uneven clustering algorithm that aggregated data within clusters. Secondly, energy consumption in the hotspot area was cut down by not clustering there during the clustering stage. Finally, in the inter-cluster communication phase, the Particle Swarm Optimization (PSO) algorithm was adopted to seek an optimal transmission path that simultaneously minimizes the maximum next-hop distance between two nodes on the routing path and the maximum hop count, thereby minimizing the energy consumption of the whole network. Theoretical analysis and experimental results show that, compared with the Reinforcement-Learning-based Lifetime Optimal routing protocol (RLLO) and the Multi-Layer routing protocol through Fuzzy logic based Clustering mechanism (MLFC), the proposed algorithm performs better in energy efficiency and energy consumption uniformity, raises the network lifetime by 20.1% and 40.5% respectively, and can avoid the "energy hole" effectively.
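The uneven clustering step can be sketched with a distance-dependent competition radius: nodes on the hotspot side (nearer the sink) get a smaller radius, so clusters shrink toward the sink and relay load is spread. The linear form, `r0` and `alpha` are assumptions, not the paper's parameters.

```python
def competition_radius(d_to_sink, d_min, d_max, r0=90.0, alpha=0.5):
    """Uneven clustering sketch: the competition radius shrinks linearly
    as a node's distance to the sink decreases."""
    frac = (d_max - d_to_sink) / (d_max - d_min)
    return (1.0 - alpha * frac) * r0

near = competition_radius(50.0, 50.0, 200.0)    # node nearest the sink
far = competition_radius(200.0, 50.0, 200.0)    # node farthest away
```

Smaller clusters near the sink carry less intra-cluster traffic, leaving their heads energy for relaying, which is exactly the imbalance that causes the "energy hole".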
To address the large frequency offset caused by the Doppler effect in high-speed moving environments, a dynamic state space model of Orthogonal Frequency Division Multiplexing (OFDM) was built, and a frequency offset tracking and estimation algorithm for OFDM based on an improved Strong Tracking Unscented Kalman Filter (STUKF) was proposed. By combining strong tracking filter theory with the UKF, a fading factor was introduced in the calculation of the measurement prediction covariance and the cross covariance. The frequency offset estimation error covariance was adjusted, the process noise covariance was controlled, and the gain matrix was adjusted in real time, so the ability to track time-varying frequency offsets was enhanced and the estimation accuracy was raised. Simulations were carried out with time-invariant and time-varying frequency offset models. The simulation results show that the proposed algorithm has better tracking and estimation performance than the UKF frequency offset estimation algorithm, with a Signal-to-Noise Ratio (SNR) gain of about 1 dB at the same Bit Error Rate (BER).
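The fading-factor mechanism can be illustrated with a scalar filter: a factor `lam >= 1` inflates the predicted covariance when the innovation sequence grows, re-opening the gain so abrupt offset changes are tracked. This is a simplified linear filter, not the paper's UKF; the smoothing constant `rho` and the fading-factor form are common textbook choices, assumed here.

```python
class StrongTrackingKF1D:
    """Scalar sketch of the strong-tracking idea: a fading factor
    inflates the predicted covariance when innovations grow."""

    def __init__(self, x0, p0, q, r, rho=0.95):
        self.x, self.p, self.q, self.r, self.rho = x0, p0, q, r, rho
        self.v_cov = None   # smoothed innovation covariance

    def step(self, z):
        x_pred = self.x                       # identity state model
        innov = z - x_pred
        if self.v_cov is None:
            self.v_cov = innov * innov
        else:
            self.v_cov = (self.rho * self.v_cov + innov * innov) / (1 + self.rho)
        lam = max(1.0, (self.v_cov - self.r) / (self.p + self.q))
        p_pred = lam * self.p + self.q        # fading factor inflates covariance
        k = p_pred / (p_pred + self.r)
        self.x = x_pred + k * innov
        self.p = (1 - k) * p_pred
        return self.x

kf = StrongTrackingKF1D(0.0, 1.0, 0.01, 0.1)
est = kf.step(1.0)   # first measurement pulls the estimate strongly
```

When the offset is steady, `lam` stays at 1 and the filter behaves like an ordinary Kalman filter; when the offset jumps, the inflated covariance restores responsiveness.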
In the domain of structural pattern recognition, existing graph embedding methods lack versatility and have high computational complexity. To solve this problem, a new graph embedding method integrating multiscale features based on space syntax theory was proposed. Global, local and detail features were extracted to construct a feature vector depicting the graph through a multiscale histogram. The global features included vertex number, edge number and intelligibility; the local features comprised node topological features, edge domain feature dissimilarity and edge topological feature dissimilarity; the detail features comprised the numerical and symbolic attributes on vertices and edges. In this way, structural pattern recognition was converted into statistical pattern recognition, so that a Support Vector Machine (SVM) could be applied to graph classification. The experimental results show that the proposed graph embedding method achieves higher classification accuracy on different graph datasets. Compared with other graph embedding methods, the proposed method adequately captures the topology of a graph, incorporates non-topological features according to the graph's domain properties, and has favorable universality and low computational complexity.
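The embedding idea can be sketched by concatenating global features (vertex and edge counts) with a coarse degree histogram standing in for the paper's local and detail histograms; the bin count and feature choice are illustrative.

```python
def graph_embedding(adj, bins=4):
    """Sketch of a multiscale feature vector: [vertex count, edge count]
    followed by a coarse degree histogram (stand-in for the paper's
    local/detail histograms)."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    hist = [0] * bins
    for nbrs in adj.values():
        hist[min(len(nbrs), bins - 1)] += 1   # clamp high degrees to last bin
    return [n, m] + hist

emb = graph_embedding({0: {1, 2}, 1: {0, 2}, 2: {0, 1}})
```

Because every graph maps to a fixed-length vector, standard statistical classifiers such as an SVM can be trained directly on these embeddings.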
To estimate the frequency offset in Orthogonal Frequency Division Multiplexing (OFDM) systems, a novel blind frequency offset estimation algorithm based on Particle Swarm Optimization (PSO) was proposed. Firstly, the mathematical model and the cost function were designed according to the principle of minimizing the reconstruction error between the reconstructed signal and the actually received signal. The powerful random, parallel, global search property of PSO was then utilized to minimize the cost function and obtain the frequency offset estimate. Two inertia weight strategies for the PSO algorithm, constant coefficient and differentially descending, were simulated and compared with the minimum output variance and golden section methods. The simulation results show that the proposed algorithm achieves high accuracy, about one order of magnitude higher than similar algorithms at the same Signal-to-Noise Ratio (SNR), is not restricted by modulation type, and covers the full frequency offset estimation range of (-0.5, 0.5).
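The search step can be sketched with a plain constant-inertia PSO over the offset range (-0.5, 0.5). A simple quadratic stands in for the signal-reconstruction cost function, with its minimum playing the role of the true offset; swarm size, inertia weight and acceleration coefficients are illustrative choices.

```python
import random

def pso_minimize(cost, lo=-0.5, hi=0.5, n=20, iters=60,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Plain PSO: particles track personal and global bests while their
    positions stay clamped to the offset range [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest, pcost = xs[:], [cost(x) for x in xs]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g], pcost[g]
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            c = cost(xs[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = xs[i], c
                if c < gcost:
                    gbest, gcost = xs[i], c
    return gbest

# Stand-in cost: a quadratic whose minimum at 0.2 plays the role of the
# reconstruction-error minimum at the true frequency offset.
est = pso_minimize(lambda f: (f - 0.2) ** 2)
```

Because the swarm only evaluates the cost function, the estimator needs no knowledge of the modulation type, matching the blindness property claimed in the abstract.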